Results 1 - 7 of 7
1.
CEUR Workshop Proceedings ; 3400:93-106, 2022.
Article in English | Scopus | ID: covidwho-20240174

ABSTRACT

In the field of explainable artificial intelligence (XAI), causal models and argumentation frameworks constitute two formal approaches that provide definitions of the notion of explanation. These symbolic approaches rely on logical formalisms to reason by abduction or to search for causal relations, starting from a formal model of a problem or situation. They are designed to satisfy properties that have been established as necessary based on the study of human-human explanations. As a consequence, they appear to be particularly interesting for human-machine interactions as well. In this paper, we show the equivalence between a particular type of causal model, which we call argumentative causal graphs (ACG), and argumentation frameworks. We also propose a transformation between these two systems and examine how one definition of explanation in argumentation theory is transposed when moving to ACGs. To illustrate our proposition, we use a highly simplified version of a screening agent for COVID-19. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)
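The core computational object here, an argumentation framework, is just a set of arguments plus an attack relation; explanations are typically read off an accepted set of arguments. A minimal sketch (not the paper's ACG construction) is the grounded extension, computed as a fixpoint over a toy screening example with hypothetical argument names:

```python
def grounded_extension(arguments, attacks):
    """Fixpoint computation: accept arguments whose attackers are all
    defeated, defeat arguments attacked by an accepted argument."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:      # every attacker already defeated
                accepted.add(a)
                changed = True
        for a in arguments:
            if a not in defeated:
                attackers = {x for (x, y) in attacks if y == a}
                if attackers & accepted:   # attacked by an accepted argument
                    defeated.add(a)
                    changed = True
    return accepted

# Hypothetical screening arguments: "fever_suggests_covid" is attacked by
# "vaccinated", which is in turn attacked by "waned_immunity".
args = {"fever_suggests_covid", "vaccinated", "waned_immunity"}
atts = {("vaccinated", "fever_suggests_covid"),
        ("waned_immunity", "vaccinated")}
print(sorted(grounded_extension(args, atts)))
# → ['fever_suggests_covid', 'waned_immunity']
```

Reinstatement is visible in the output: the attack on "fever_suggests_covid" is itself defeated, so the argument is accepted.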

2.
Risks ; 11(5), 2023.
Article in English | Scopus | ID: covidwho-20235997

ABSTRACT

Predictive analytics of financial markets in developed and emerging economies during the COVID-19 regime is undeniably challenging due to unavoidable uncertainty and the profound proliferation of negative news on different platforms. Tracking the media echo is crucial to explaining and anticipating the abrupt fluctuations in financial markets. The present research attempts to propound a robust framework capable of channeling macroeconomic reflectors and essential media chatter-linked variables to draw precise forecasts of future figures for the Spanish and Indian stock markets. The predictive structure combines Isometric Mapping (ISOMAP), a non-linear feature transformation tool, with Gradient Boosting Regression (GBR), an ensemble machine learning technique, to perform predictive modelling. Explainable Artificial Intelligence (XAI) is used to interpret the black-box predictive model and infer meaningful insights. The overall results duly justify the incorporation of local and global media chatter indices in explaining the dynamics of the respective financial markets. The findings imply marginally better predictability of Indian stock markets than their Spanish counterparts. The current work strives to compare and contrast the reaction of developed and developing financial markets during the COVID-19 pandemic, which has been argued to closely resemble a Black Swan event, using a robust research framework. The insights linked to the dependence of stock markets on macroeconomic indicators can be leveraged in policy formulation for augmenting household finance. © 2023 by the authors.
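The ISOMAP-then-GBR structure described above can be sketched with scikit-learn. This is an illustrative pipeline on synthetic data; the predictor count, hyperparameters, and the target are placeholders, not the paper's actual macroeconomic and media-chatter variables:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))   # 12 raw predictors (stand-ins for macro + media indices)
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=300)

# Step 1: non-linear feature transformation into a low-dimensional embedding.
embed = Isomap(n_neighbors=10, n_components=5)
Z = embed.fit_transform(X)

# Step 2: ensemble regression on the embedded features (first 250 rows train).
model = GradientBoostingRegressor(n_estimators=200).fit(Z[:250], y[:250])
pred = model.predict(Z[250:])

rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
print(f"test RMSE: {rmse:.3f}")
```

Note that Isomap has no `transform` consistent with out-of-sample geodesics unless fitted on all points; a production forecasting setup would fit the embedding on training data only.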

3.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2328320

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illness, and complications may result in death. Using medical images to detect COVID-19 among essentially identical thoracic anomalies is challenging because it is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models DenseNet, VGG-16, and InceptionV3, which are trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for performance gain. End-to-end training and evaluation procedures are conducted using the COVID-19_Radiography_Dataset for binary and multi-class classification scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1 scores of 92.68% and 98.67% for the multi-class and binary classification scenarios, respectively. In addition, this study highlights the gains of feature concatenation over the best individual model in accuracy (98.0% vs. 96.33%) and F1 score (97.34% vs. 95.10%). Furthermore, a visual representation of the saliency maps of the employed attention mechanism, focusing on the abnormal regions, is presented using explainable artificial intelligence (XAI) technology. The proposed framework provides better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.
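The two building blocks named above, concatenating backbone feature vectors and passing them through multi-head self-attention, can be sketched in plain NumPy. All shapes and the random "backbone features" are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads = 64, 4
d_head = d_model // n_heads

# Pretend feature vectors from three fine-tuned backbones, one token each
# (stand-ins for DenseNet, VGG-16, InceptionV3 features).
feats = [rng.normal(size=(1, d_model)) for _ in range(3)]
tokens = np.concatenate(feats, axis=0)          # (3, d_model)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, Wq, Wk, Wv, Wo):
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # (tokens, d_model) each
    outs = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)  # scaled dot product
        outs.append(softmax(scores) @ v[:, s])
    return np.concatenate(outs, axis=-1) @ Wo   # (tokens, d_model)

W = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(4)]
attended = multi_head_self_attention(tokens, *W)
print(attended.shape)    # → (3, 64)
```

Each head attends over the three backbone tokens independently, letting the model weight one backbone's evidence against another's before classification.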

4.
2022 IEEE International Conference on E-health Networking, Application and Services, HealthCom 2022 ; : 246-251, 2022.
Article in English | Scopus | ID: covidwho-2213190

ABSTRACT

In the current era of big data, very large amounts of data are generated at a rapid rate from a wide variety of rich data sources. Electronic health (e-health) records are one example of such big data. With technological advancements, more healthcare practice has gradually been supported by electronic processes and communication. This enables health informatics, in which computer science meets the healthcare sector to address healthcare and medical problems. Embedded in the big data are valuable information and knowledge that can be discovered by data science, data mining and machine learning techniques. Many of these techniques apply "opaque box" approaches to make accurate predictions. However, these techniques may not be transparent to the users. As users may not be able to clearly view the entire knowledge discovery (e.g., prediction) process, they may not easily trust the discovered knowledge (e.g., predictions). Hence, in this paper, we present a system for providing trustworthy explanations for knowledge discovered from e-health records. Specifically, our system provides users with global explanations for the important features among the records. It also provides users with local explanations for a particular record. Evaluation results on real-life e-health records show the practicality of our system in providing trustworthy explanations for discovered knowledge (e.g., accurate predictions made). © 2022 IEEE.
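The global-versus-local distinction drawn above can be sketched with scikit-learn: impurity-based importances for the global view, and a simple mean-substitution perturbation for a single record's local view. The feature names and data are hypothetical, and this perturbation scheme is a generic stand-in, not the paper's system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
names = ["age", "bp", "glucose", "bmi"]
X = rng.normal(size=(400, 4))
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)   # glucose and age drive the label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explanation: which features matter across all records.
global_imp = dict(zip(names, clf.feature_importances_))

# Local explanation for one record: how much each feature shifts its score
# when replaced by the dataset mean.
record = X[0:1]
base = clf.predict_proba(record)[0, 1]
local = {}
for i, n in enumerate(names):
    perturbed = record.copy()
    perturbed[0, i] = X[:, i].mean()
    local[n] = base - clf.predict_proba(perturbed)[0, 1]

print("most important globally:", max(global_imp, key=global_imp.get))
```

In this synthetic setup the global ranking recovers the label's true drivers, while the local deltas show which features pushed this particular record toward its prediction.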

5.
Studies in Computational Intelligence ; 1021:181-215, 2022.
Article in English | Scopus | ID: covidwho-1905964

ABSTRACT

This chapter presents a computational model for analysis of COVID-19 dynamic spread behavior using Explainable Artificial Intelligence (XAI). The proposed methodology consists of recursive parametric estimation of an interval type-2 fuzzy Kalman filter for tracking and forecasting the dynamics inherent to epidemiological experimental data, using a type-2 fuzzy version of the Observer/Kalman Filter Identification (OKID) algorithm. The partitioning of the experimental dataset is obtained by formulating a type-2 fuzzy version of the Gustafson-Kessel clustering algorithm. The intelligent computational model of the interval type-2 fuzzy Kalman filter is updated recursively according to the unobservable components obtained by recursive spectral decomposition of the epidemiological experimental data. Experimental results illustrate the efficiency and applicability of the proposed methodology for tracking and forecasting the dynamic spread behavior of novel Coronavirus 2019 in Brazil, compared to relevant approaches from the literature using the metrics Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Root Mean Square Percentage Error (RMSPE), R² (coefficient of determination), Median Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
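The tracking idea and two of the listed metrics (RMSE, MAPE) can be sketched with a standard scalar Kalman filter over a noisy case-count-like series. This is the plain filter, not the chapter's interval type-2 fuzzy variant, and all series parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
true = 100 + np.cumsum(rng.normal(0.5, 1.0, size=100))   # drifting latent series
obs = true + rng.normal(0.0, 5.0, size=100)              # noisy observations

q, r = 1.0, 25.0        # process and measurement noise variances
x, p = obs[0], 1.0      # state estimate and its variance
est = []
for z in obs:
    p += q              # predict: variance grows by process noise
    k = p / (p + r)     # Kalman gain
    x += k * (z - x)    # update toward the innovation
    p *= (1.0 - k)
    est.append(x)
est = np.array(est)

rmse = float(np.sqrt(np.mean((est - true) ** 2)))
mape = float(np.mean(np.abs((est - true) / true)) * 100)
print(f"RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```

With a well-matched noise model the filtered RMSE falls below the raw observation noise; the fuzzy extension in the chapter effectively adapts these noise assumptions per data partition.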

6.
2022 International Conference on Innovations in Science, Engineering and Technology, ICISET 2022 ; : 272-277, 2022.
Article in English | Scopus | ID: covidwho-1901439

ABSTRACT

Biomedical instrumentation is one of the fastest-emerging innovative health technologies, with a proven contribution to interdisciplinary medicine; it helps physicians diagnose complex medical problems and treat patients precisely and safely. Within this technological trend, explainable artificial intelligence, biomedical image processing and augmented intelligence can provide a tool that helps pediatricians, pulmonology and otolaryngology physicians, epidemiologists and pediatric practitioners to interpretably and reliably diagnose chronic and acute respiratory disorders in children, adolescents and infants. Unfortunately, the reliability of digital image processing for pulmonary disease diagnosis often depends on the availability of large chest X-ray image datasets. This work presents a reliable, interpretable deep transfer learning approach for pediatric pulmonary health evaluation despite the scarcity and limited size of annotated pediatric chest X-ray image datasets. The approach leverages a combination of computer vision tools and techniques to reduce child morbidity and mortality through predictive and preventive medicine, with reduced surveillance risks and affordability in low-resource settings. On open datasets, the deep neural networks classified the generated augmented images into four classes, namely Normal, COVID-19, Tuberculosis and Pneumonia, at accuracies of 97%, 97%, 70% and 73%, respectively, with a recall of 100% for Pneumonia and an overall accuracy of 79% at only 10 epochs for both regular and transfer learning. © 2022 IEEE.

7.
8th IEEE Asia-Pacific Conference on Computer Science and Data Engineering (IEEE CSDE) ; 2021.
Article in English | Web of Science | ID: covidwho-1895891

ABSTRACT

Maternal and neonatal health has been greatly constrained by lack of access to essential maternal health care services under the preventive measures implemented against the spread of COVID-19, making maternal and fetal monitoring hard for physicians. Besides maternal toxic stress caused by fear of catching COVID-19, the limited affordable mobility of pregnant mothers to skilled health practitioners in resource-limited settings is another contributor to maternal and neonatal mortality and morbidity. In this work, we leveraged existing health data to build interpretable machine learning (ML) models that allow physicians to offer precision maternal and fetal medicine based on biomedical signal classification of fetal cardiotocograms (CTGs). We obtained 99% accuracy, 100% precision and 97% recall for the LightGBM classification model without any GPU learning resources. We then evaluated the explainability of all built models with ELI5 and comprehensive feature extraction.
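The classification-and-metrics step described above can be sketched on synthetic CTG-like features; scikit-learn's GradientBoostingClassifier stands in for LightGBM here, and the feature layout and label rule are illustrative, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Six stand-in CTG features (e.g., baseline FHR, accelerations, decelerations...).
X = rng.normal(size=(500, 6))
# Hypothetical "fetal state" label driven by two of the features.
y = (X[:, 0] - X[:, 3] + 0.2 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"accuracy={accuracy_score(y_te, pred):.2f}  "
      f"precision={precision_score(y_te, pred):.2f}  "
      f"recall={recall_score(y_te, pred):.2f}")
```

Gradient-boosted trees need no GPU and expose per-feature weights, which is what tools like ELI5 inspect when explaining such models.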
